Vision Transformers have recently shown great promise for many vision tasks thanks to their insightful architecture design and attention mechanism. By revisiting the self-attention responses in Transformers, we empirically observe two interesting issues. First, Vision Transformers present a query-irrelevant behavior at deep layers, where the attention maps exhibit nearly consistent contexts at global scope, regardless of the query patch position (and regardless of the head). Second, the attention maps are intrinsically sparse: a few tokens dominate the attention weights, and introducing knowledge from ConvNets would largely smooth the attention and enhance the performance. Motivated by the above observations, we generalize the self-attention formulation to abstract a query-irrelevant global context directly and further integrate the global context into convolutions. The resulting model, a Fully Convolutional Vision Transformer (i.e., FCViT), purely consists of convolutional layers and firmly inherits the merits of both the attention mechanism and convolutions, including dynamic properties, weight sharing, and short- and long-range feature modeling. Experimental results demonstrate the effectiveness of FCViT. With fewer than 14M parameters, our FCViT-S12 outperforms the related work ResT-Lite by 3.7% top-1 accuracy on ImageNet-1K. When scaling FCViT to larger models, we still perform better than the previous state-of-the-art ConvNeXt with even fewer parameters. FCViT-based models also demonstrate promising transferability to downstream tasks such as object detection, instance segmentation, and semantic segmentation. Code and models are available at: https://github.com/ma-xu/FCViT.
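As a rough illustration of the core idea only (the authors' implementation may differ; the module and parameter names below are hypothetical), the following PyTorch sketch computes a single query-irrelevant global context vector from a softmax over spatial positions and fuses it into a depthwise convolutional branch:

```python
# Minimal sketch, assuming a query-irrelevant global context fused into a
# conv branch. Not the FCViT code; names and shapes are illustrative.
import torch
import torch.nn as nn

class GlobalContextConv(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Conv2d(dim, 1, kernel_size=1)   # one weight per position, shared by all queries
        self.value = nn.Conv2d(dim, dim, kernel_size=1)
        self.local = nn.Conv2d(dim, dim, kernel_size=3, padding=1, groups=dim)  # short-range modeling
        self.proj = nn.Conv2d(dim, dim, kernel_size=1)

    def forward(self, x):                                    # x: (B, C, H, W)
        b, c, h, w = x.shape
        weights = self.score(x).flatten(2).softmax(dim=-1)   # (B, 1, HW): query-irrelevant attention
        values = self.value(x).flatten(2)                     # (B, C, HW)
        ctx = (values * weights).sum(dim=-1).view(b, c, 1, 1) # single global context vector
        return self.proj(self.local(x) + ctx)                 # fuse long- and short-range branches

x = torch.randn(2, 64, 14, 14)
print(GlobalContextConv(64)(x).shape)  # torch.Size([2, 64, 14, 14])
```

Because the context is computed once per feature map rather than once per query, the whole block stays purely convolutional while still capturing long-range information.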
Point cloud analysis is challenging due to the irregular and unordered data structure. To capture the 3D geometries, prior works mainly rely on exploring sophisticated local geometric extractors using convolution, graph, or attention mechanisms. These methods, however, incur unfavorable latency during inference, and their performance has saturated over the past few years. In this paper, we present a novel perspective on this task. We notice that detailed local geometrical information is probably not the key to point cloud analysis -- we introduce a pure residual MLP network, called PointMLP, which integrates no sophisticated local geometrical extractors but still performs very competitively. Equipped with a proposed lightweight geometric affine module, PointMLP delivers new state-of-the-art results on multiple datasets. On the real-world ScanObjectNN dataset, our method even surpasses the prior best method by 3.3% accuracy. We emphasize that PointMLP achieves this strong performance without any sophisticated operations, hence leading to superior inference speed. Compared to the most recent CurveNet, PointMLP trains 2x faster, tests 7x faster, and is more accurate on the ModelNet40 benchmark. We hope our PointMLP may help the community towards a better understanding of point cloud analysis. The code is available at https://github.com/ma-xu/pointMLP-pytorch.
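Since the geometric affine module is the only geometry-specific component, a rough PyTorch sketch of the idea may help (this is not the official implementation; shapes and names are illustrative): neighbour features are re-centred on their grouping point, normalised by a data-dependent scale, and rescaled by learnable affine parameters.

```python
# Minimal sketch of a geometric affine module, assuming grouped neighbour
# features of shape (B, N, K, C) and group centres of shape (B, N, C).
import torch
import torch.nn as nn

class GeometricAffine(nn.Module):
    def __init__(self, channels, eps=1e-5):
        super().__init__()
        self.alpha = nn.Parameter(torch.ones(1, 1, 1, channels))   # learnable scale
        self.beta = nn.Parameter(torch.zeros(1, 1, 1, channels))   # learnable shift
        self.eps = eps

    def forward(self, grouped, centers):
        diff = grouped - centers.unsqueeze(2)                       # offsets from each centre point
        sigma = diff.reshape(diff.size(0), -1).std(dim=1).view(-1, 1, 1, 1)  # per-sample scale
        return self.alpha * diff / (sigma + self.eps) + self.beta

module = GeometricAffine(channels=32)
out = module(torch.randn(2, 128, 16, 32), torch.randn(2, 128, 32))
print(out.shape)  # torch.Size([2, 128, 16, 32])
```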
Semi-supervised domain adaptation (SSDA) is a challenging problem that requires methods to overcome both 1) overfitting towards poorly annotated data and 2) distribution shift across domains. Unfortunately, a simple combination of domain adaptation (DA) and semi-supervised learning (SSL) methods often fails to address these two objectives, because the training is biased towards the labeled samples. In this paper, we introduce an adaptive structure learning method to regularize the cooperation of SSL and DA. Inspired by multi-view learning, our proposed framework consists of a shared feature encoder network and two classifier networks trained with contradictory purposes. Among them, one classifier is applied to group the target features to improve intra-class density, enlarging the gaps between categorical clusters for robust representation learning. Meanwhile, the other classifier, acting as a scatterer, attempts to scatter the source features to enhance the smoothness of the decision boundary. Iterating target clustering and source scattering keeps the target features well enclosed within the dilated boundaries of the corresponding source points. To jointly address cross-domain feature alignment and learning from partially labeled data, we apply maximum mean discrepancy (MMD) distance minimization and self-training (ST) to project the contradictory structures into a shared view for reliable final decisions. Experimental results on standard SSDA benchmarks, including DomainNet and Office-Home, demonstrate the accuracy and robustness of our method over state-of-the-art approaches.
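For concreteness, here is a minimal sketch (not the paper's code) of the MMD term used for cross-domain feature alignment, with a Gaussian kernel and an illustrative bandwidth:

```python
# Minimal sketch of squared MMD between source and target feature batches,
# assuming an RBF kernel; the bandwidth value is an illustrative placeholder.
import torch

def rbf_mmd(source, target, bandwidth=1.0):
    """source: (N, D), target: (M, D) -> scalar squared MMD."""
    def kernel(a, b):
        d2 = torch.cdist(a, b) ** 2                      # pairwise squared distances
        return torch.exp(-d2 / (2 * bandwidth ** 2))
    return kernel(source, source).mean() + kernel(target, target).mean() \
        - 2 * kernel(source, target).mean()

src = torch.randn(32, 128)
tgt = torch.randn(32, 128) + 0.5
print(rbf_mmd(src, tgt).item())   # larger value = larger distribution gap
```

Minimizing this quantity over the shared encoder pulls the source and target feature distributions together while the two classifiers shape the cluster structure.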
Anomaly detection is a fundamental yet challenging problem in machine learning due to the lack of label information. In this work, we propose a novel and powerful framework, called SLA$^2$P, for unsupervised anomaly detection. After extracting representative embeddings from the raw data, we apply random projections to the features and regard features transformed by different projections as belonging to different pseudo-classes. We then train a classifier network on these transformed features to perform self-supervised learning. Next, we add adversarial perturbations to the transformed features to decrease the softmax scores of their predicted labels, and we design anomaly scores based on the classifier's predictive uncertainty on these perturbed features. Our motivation is that, because anomalies are relatively few and exhibit scattered patterns, 1) training the pseudo-label classifier concentrates on learning the semantic information of normal data rather than that of anomalies, and 2) the transformed features of normal data are more robust to the perturbations than those of anomalies. Consequently, the perturbed transformed features of anomalies cannot be classified well and thus yield higher anomaly scores than normal samples. Extensive experiments on image, text, and inherently tabular benchmark datasets show that SLA$^2$P consistently achieves state-of-the-art results on unsupervised anomaly detection tasks.
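A rough sketch of the scoring step may clarify the pipeline. It assumes a feature extractor has already produced embeddings; the classifier, projection matrices, and perturbation size below are illustrative placeholders, not the authors' settings.

```python
# Sketch of pseudo-class scoring with adversarial perturbation, assuming K
# random projections as pseudo-classes and a trained classifier over them.
import torch
import torch.nn.functional as F

def normality_score(classifier, features, projections, epsilon=0.05):
    scores = []
    for k, P in enumerate(projections):                  # K random projection matrices
        z = (features @ P).clone().requires_grad_(True)  # transformed features (pseudo-class k)
        loss = F.cross_entropy(classifier(z),
                               torch.full((z.size(0),), k, dtype=torch.long))
        grad, = torch.autograd.grad(loss, z)
        z_adv = z + epsilon * grad.sign()                # perturb to lower the softmax score
        with torch.no_grad():
            scores.append(F.softmax(classifier(z_adv), dim=1)[:, k])
    # confidence that survives the perturbation; anomaly score = negative of this
    return torch.stack(scores, dim=1).mean(dim=1)

D, K = 64, 8
clf = torch.nn.Linear(D, K)                              # stand-in for the trained classifier
projs = [torch.randn(D, D) / D ** 0.5 for _ in range(K)]
print(normality_score(clf, torch.randn(16, D), projs).shape)  # torch.Size([16])
```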
Recognizing Families In the Wild (RFIW), held as a data challenge in conjunction with the 16th IEEE International Conference on Automatic Face and Gesture Recognition (FG), is a large-scale, multi-track evaluation of visual kinship recognition. This is our fifth edition of RFIW, with which we continue our efforts to attract scholars, bring together professionals, publish new work, and discuss prospects. In this paper, we summarize the submissions to this year's three RFIW tasks: specifically, we review the results of kinship verification, tri-subject verification, and family member search and retrieval. We also take a broader look at the RFIW problem, share current efforts, and make recommendations for promising future directions.
Demographic bias exists in current models for face recognition (FR). Our Balanced Faces in the Wild (BFW) dataset serves as a proxy for measuring bias across ethnicity and gender subgroups, allowing one to characterize FR performance per subgroup. We show that results are non-optimal when a single score threshold determines whether sample pairs are genuine or imposters. Within subgroups, performance often varies widely from the global average, so specific error rates only hold for populations that match the validation data. We mitigate the imbalanced performance using a novel domain adaptation learning scheme on facial features extracted with state-of-the-art neural networks. This technique not only balances performance but also improves overall performance. A benefit of the proposed approach is that identity information is preserved in the facial features while the demographic information they contain is reduced. Removing demographic knowledge prevents potential future bias from being injected into decision making. This removal also improves privacy, since less information about an individual is available or can be inferred. We explore this qualitatively; we also show quantitatively that subgroup classifiers can no longer learn from the features produced by the proposed domain adaptation scheme. For source code and data descriptions, see https://github.com/visionjo/facerec-bias-bfw.
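To illustrate why a single global threshold is problematic (this is a toy illustration with synthetic data, not the paper's domain-adaptation method), one can calibrate a separate threshold per subgroup so that each operates at the same target false match rate:

```python
# Illustration only: per-subgroup verification thresholds at a common target
# false match rate, instead of one global threshold. Data here is synthetic.
import numpy as np

def subgroup_thresholds(scores, is_genuine, subgroup, target_fmr=1e-3):
    """Return {subgroup: threshold} so each subgroup operates at target_fmr."""
    thresholds = {}
    for g in np.unique(subgroup):
        imposter = scores[(subgroup == g) & (~is_genuine)]
        # threshold = (1 - target_fmr) quantile of that subgroup's imposter scores
        thresholds[g] = float(np.quantile(imposter, 1.0 - target_fmr))
    return thresholds

rng = np.random.default_rng(0)
scores = rng.normal(0.3, 0.2, 10000)
genuine = rng.random(10000) < 0.5
scores[genuine] += 0.4                                  # genuine pairs score higher
groups = rng.choice(["AF", "AM", "BF", "BM"], size=10000)
print(subgroup_thresholds(scores, genuine, groups))
```

The different thresholds returned per subgroup show how a single global cut-off would place the subgroups at different operating points.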
We show for the first time that large-scale generative pretrained transformer (GPT) family models can be pruned to at least 50% sparsity in one shot, without any retraining, with minimal loss of accuracy. This is achieved via a new pruning method called SparseGPT, specifically designed to work efficiently and accurately on massive GPT-family models. When executing SparseGPT on the largest available open-source models, OPT-175B and BLOOM-176B, we can reach 60% sparsity with a negligible increase in perplexity: remarkably, more than 100 billion weights from these models can be ignored at inference time. SparseGPT generalizes to semi-structured (2:4 and 4:8) patterns, and is compatible with weight quantization approaches.
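For readers unfamiliar with the semi-structured pattern mentioned above, the sketch below shows only what a 2:4 mask looks like, using naive magnitude selection; SparseGPT itself chooses and updates weights with an approximate second-order solver, which this toy version does not attempt.

```python
# Illustration of the 2:4 semi-structured sparsity pattern: within every group
# of 4 consecutive weights, 2 are zeroed. Magnitude-based selection is used
# here purely for demonstration; it is not the SparseGPT criterion.
import torch

def prune_2_of_4(weight):
    """Zero out the 2 smallest-magnitude weights in every group of 4 (last dim)."""
    orig_shape = weight.shape
    w = weight.reshape(-1, 4)
    drop = w.abs().topk(2, dim=1, largest=False).indices   # 2 smallest per group
    mask = torch.ones_like(w)
    mask.scatter_(1, drop, 0.0)
    return (w * mask).reshape(orig_shape)

w = torch.randn(8, 16)
pruned = prune_2_of_4(w)
print((pruned == 0).float().mean().item())   # 0.5, i.e. exactly 50% sparsity
```

The 2:4 layout matters in practice because it maps onto sparse tensor-core support on recent GPUs, so the zeroed weights translate into real speedups.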
Most existing text-video retrieval methods focus on cross-modal matching between the visual content of offline videos and textual query sentences. However, in real scenarios, online videos are frequently accompanied by relevant text information such as titles, tags, and even subtitles, which can be utilized to match textual queries. This inspires us to generate associated captions from offline videos to help with existing text-video retrieval methods. To do so, we propose to use the zero-shot video captioner with knowledge of pre-trained web-scale models (e.g., CLIP and GPT-2) to generate captions for offline videos without any training. Given the captions, one question naturally arises: what can auxiliary captions do for text-video retrieval? In this paper, we present a novel framework Cap4Video, which makes use of captions from three aspects: i) Input data: The video and captions can form new video-caption pairs as data augmentation for training. ii) Feature interaction: We perform feature interaction between video and caption to yield enhanced video representations. iii) Output score: The Query-Caption matching branch can be complementary to the original Query-Video matching branch for text-video retrieval. We conduct thorough ablation studies to demonstrate the effectiveness of our method. Without any post-processing, our Cap4Video achieves state-of-the-art performance on MSR-VTT (51.4%), VATEX (66.6%), MSVD (51.8%), and DiDeMo (52.0%).
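Of the three aspects, the output-score one is the simplest to make concrete. A minimal sketch (not the Cap4Video code; the fusion weight is an illustrative hyperparameter) combines the query-video and query-caption similarities:

```python
# Sketch of output-score fusion, assuming L2-normalized embeddings for text
# queries, videos, and generated captions. `alpha` is a placeholder weight.
import torch
import torch.nn.functional as F

def retrieval_scores(query_emb, video_emb, caption_emb, alpha=0.5):
    q = F.normalize(query_emb, dim=-1)        # (Q, D) text queries
    v = F.normalize(video_emb, dim=-1)        # (V, D) video representations
    c = F.normalize(caption_emb, dim=-1)      # (V, D) caption representations
    return q @ v.T + alpha * (q @ c.T)        # (Q, V) fused similarity matrix

scores = retrieval_scores(torch.randn(4, 512), torch.randn(10, 512), torch.randn(10, 512))
print(scores.argmax(dim=1))                   # best-matching video per query
```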
Objective: Despite numerous studies proposed for audio restoration in the literature, most of them focus on an isolated restoration problem such as denoising or dereverberation, ignoring other artifacts. Moreover, assuming a noisy or reverberant environment with a limited number of fixed signal-to-distortion ratio (SDR) levels is a common practice. However, real-world audio is often corrupted by a blend of artifacts such as reverberation, sensor noise, and background audio mixture with varying types, severities, and durations. In this study, we propose a novel approach for blind restoration of real-world audio signals by Operational Generative Adversarial Networks (Op-GANs) with temporal and spectral objective metrics to enhance the quality of the restored audio signal regardless of the type and severity of each artifact corrupting it. Methods: 1D Operational GANs are used with a generative neuron model optimized for blind restoration of any corrupted audio signal. Results: The proposed approach has been evaluated extensively over the benchmark TIMIT-RAR (speech) and GTZAN-RAR (non-speech) datasets corrupted with a random blend of artifacts, each with a random severity, to mimic real-world audio signals. Average SDR improvements of over 7.2 dB and 4.9 dB are achieved, respectively, which are substantial when compared with the baseline methods. Significance: This is a pioneering study in blind audio restoration with the unique capability of direct (time-domain) restoration of real-world audio whilst achieving an unprecedented level of performance for a wide SDR range and artifact types. Conclusion: 1D Op-GANs can achieve robust and computationally efficient real-world audio restoration with significantly improved performance. The source codes and the generated real-world audio datasets are shared publicly with the research community in a dedicated GitHub repository.
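For reference, the SDR figures reported above follow the standard definition of signal-to-distortion ratio; a small sketch of the metric (this is the textbook formula, not code from the paper):

```python
# SDR in dB between a clean reference and a restored estimate:
# 10 * log10( ||reference||^2 / ||reference - estimate||^2 )
import numpy as np

def sdr_db(reference, estimate, eps=1e-12):
    num = np.sum(reference ** 2)
    den = np.sum((reference - estimate) ** 2) + eps
    return 10.0 * np.log10(num / den + eps)

t = np.linspace(0, 1, 16000)
clean = np.sin(2 * np.pi * 440 * t)          # 1 s of a 440 Hz tone at 16 kHz
noisy = clean + 0.05 * np.random.randn(t.size)
print(f"{sdr_db(clean, noisy):.1f} dB")
```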
We study the task of learning state representations from potentially high-dimensional observations, with the goal of controlling an unknown partially observable system. We pursue a direct latent model learning approach, where a dynamic model in some latent state space is learned by predicting quantities directly related to planning (e.g., costs) without reconstructing the observations. In particular, we focus on an intuitive cost-driven state representation learning method for solving Linear Quadratic Gaussian (LQG) control, one of the most fundamental partially observable control problems. As our main results, we establish finite-sample guarantees of finding a near-optimal state representation function and a near-optimal controller using the directly learned latent model. To the best of our knowledge, despite various empirical successes, prior to this work it was unclear if such a cost-driven latent model learner enjoys finite-sample guarantees. Our work underscores the value of predicting multi-step costs, an idea that is key to our theory, and notably also an idea that is known to be empirically valuable for learning state representations.
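To make "predicting quantities directly related to planning" concrete, here is a minimal sketch of a cost-driven latent model (not the paper's algorithm; the architecture and dimensions are illustrative): an encoder maps observations to a latent state, a latent dynamics model rolls it forward under actions, and training regresses the predicted multi-step costs onto observed costs instead of reconstructing observations.

```python
# Minimal sketch of a cost-driven latent model trained on multi-step costs.
# All module choices and dimensions below are illustrative assumptions.
import torch
import torch.nn as nn

class CostDrivenLatentModel(nn.Module):
    def __init__(self, obs_dim, act_dim, latent_dim):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, latent_dim)             # observation -> latent state
        self.dynamics = nn.Linear(latent_dim + act_dim, latent_dim)
        self.cost_head = nn.Linear(latent_dim, 1)                  # per-step cost prediction

    def multi_step_costs(self, obs, actions):
        """obs: (B, obs_dim); actions: (B, T, act_dim) -> predicted costs (B, T)."""
        z = self.encoder(obs)
        costs = []
        for t in range(actions.size(1)):
            z = self.dynamics(torch.cat([z, actions[:, t]], dim=-1))
            costs.append(self.cost_head(z).squeeze(-1))
        return torch.stack(costs, dim=1)

model = CostDrivenLatentModel(obs_dim=32, act_dim=4, latent_dim=8)
pred = model.multi_step_costs(torch.randn(16, 32), torch.randn(16, 5, 4))
loss = ((pred - torch.randn(16, 5)) ** 2).mean()   # regress onto observed costs
print(pred.shape, loss.item())
```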